
    Low power LVDS transceiver for AER links with burst mode operation capability

    This paper presents the design and simulation of an LVDS transceiver intended for use in serial AER links. Traditional implementations of LVDS serial interfaces require a continuous data flow between transmitter and receiver to maintain synchronization. However, the serial AER-LVDS interface proposed in [2] operates in a burst mode, with long periods of silence without data transmission. This can be exploited to reduce power consumption by switching off the LVDS circuitry during the pauses. Moreover, a fast recovery time after pauses must be achieved so as not to slow down the interface operation. The transceiver was designed in a 90 nm technology. Extensive post-layout simulations demonstrate 1 Gbps data-rate operation across all corners. The driver and receiver occupy areas of 100×215 μm² and 100×140 μm², respectively.
    Funding: European Union 216777 (NABAB); Ministerio de Ciencia y Tecnología TEC2006-11730-C03-01 (SAMANTA II); Junta de Andalucía P06-TIC-0141

    OTA-C oscillator with low frequency variations for on-chip clock generation in serial LVDS-AER links

    This paper presents the design and simulation of an OTA-C oscillator intended to be used as an on-chip frequency reference. This reference will be part of the high-speed clock generation circuit for Manchester serial LVDS-AER links. A Manchester LVDS receiver can adapt its operation over a limited range of frequencies, so the most important specification is frequency stability over temperature and process variations. A novel design methodology is presented to design two oscillators in a 90 nm technology using transistors with a 2.5 V supply voltage. Intensive simulations with temperature, process, and supply voltage variations and mismatch effects were performed in order to analyze the validity of this approach, obtaining Δ ≈ 7%.
    Funding: European Union 216777 (NABAB); Ministerio de Educación y Ciencia TEC2006-11730-C03-01; Junta de Andalucía P06-TIC-0141

    An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors

    Event-Driven vision sensing is a new way of sensing visual reality in a frame-free manner. That is, the vision sensor (camera) does not capture a sequence of still frames, as in conventional video and computer vision systems. In Event-Driven sensors each pixel autonomously and asynchronously decides when to send its address out. This way, the sensor output is a continuous stream of address events representing the scene dynamically, continuously, and without being constrained to frames. In this paper we present an Event-Driven Convolution Module for computing 2D convolutions on such event streams. The Convolution Module has been designed so that many of them can be assembled to build modular and hierarchical Convolutional Neural Networks for robust shape- and pose-invariant object recognition. The Convolution Module has multi-kernel capability: it selects the convolution kernel depending on the origin of the event. A proof-of-concept test prototype has been fabricated in a 0.35 μm CMOS process and extensive experimental results are provided. The Convolution Processor has also been combined with an Event-Driven Dynamic Vision Sensor (DVS) for high-speed recognition examples. The chip can discriminate propellers rotating at 2,000 revolutions per second, detect symbols in a 52-card deck while browsing all cards in 410 ms, or detect and follow the center of a phosphor oscilloscope trace rotating at 5 kHz.
    Funding: European Union 216777 (NABAB); Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
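    The multi-kernel, event-driven operating principle described above can be illustrated with a minimal software sketch. This is not the chip's circuit-level implementation, only an assumed behavioral model: each incoming address event selects a kernel by its source, the kernel is added around the event address in an accumulator array, and any accumulator cell crossing a threshold emits an output event and resets (integrate-and-fire style). All names, parameters, and the threshold/reset policy are illustrative assumptions.

```python
import numpy as np

def event_driven_convolution(events, kernels, shape, threshold):
    """Behavioral sketch of an event-driven multi-kernel convolution.

    events    : iterable of (x, y, source) tuples; 'source' selects the kernel
    kernels   : dict mapping source id -> 2D numpy kernel (odd dimensions)
    shape     : (H, W) of the accumulator (pixel) array
    threshold : accumulator value at which an output event fires
    Returns the list of emitted output events as (x, y) tuples.
    """
    acc = np.zeros(shape)
    out = []
    for x, y, src in events:
        k = kernels[src]
        kh, kw = k.shape
        # Add the kernel centered on the event address, clipped at the borders.
        x0, y0 = x - kh // 2, y - kw // 2
        for i in range(kh):
            for j in range(kw):
                xi, yj = x0 + i, y0 + j
                if 0 <= xi < shape[0] and 0 <= yj < shape[1]:
                    acc[xi, yj] += k[i, j]
                    if acc[xi, yj] >= threshold:
                        out.append((xi, yj))  # emit an output event
                        acc[xi, yj] = 0.0     # reset, integrate-and-fire style
    return out
```

    Because output events are produced as soon as a threshold is crossed, latency is per-event rather than per-frame, which is the key advantage the paper exploits.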

    Voltage Mode Driver for Low Power Transmission of High Speed Serial AER Links

    This paper presents a voltage-mode high-speed driver to transmit serial AER data in scalable multi-chip AER systems. To take advantage of the asynchronous nature of AER (Address Event Representation) streams, this implementation allows an energy-efficient burst-mode operation. This is achieved by switching the driver on/off during data pauses to reduce static power consumption. Impedance matching is calibrated continuously to track temperature variations, obtaining optimal performance without degrading the data rate. Power management techniques for switching drivers are discussed and an internally compensated high-speed regulator is presented. The system has been designed in a 0.35 μm CMOS technology to transmit data rates up to 500 Mbps using Manchester encoding. Layout-extracted simulation results, including all interconnect parasitics, are presented. The estimated peak rate is 15 Meps for 32-bit events. The simulated power consumption of transmitter and receiver is 33.2 mW at peak rate and 1.3 mW below 100 keps.
    Funding: European Union 216777 (NABAB); Ministerio de Educación y Ciencia TEC2006-11730-C03-01; Ministerio de Ciencia e Innovación TEC2009-10639-C04-01; Junta de Andalucía P06-TIC-0141
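    Manchester encoding, used by this link, guarantees a transition in every bit period, which lets the receiver recover the clock from the data itself and detect the start of a burst after a silent pause. A minimal bit-level sketch (using the IEEE 802.3 convention, 0 → high-to-low, 1 → low-to-high; the paper does not state which convention the chip uses, so this is an assumption):

```python
def manchester_encode(bits):
    """Encode a bit sequence as half-bit symbol pairs (IEEE 802.3 convention):
    a 0 becomes the pair (1, 0) and a 1 becomes (0, 1), so every bit period
    contains a mid-bit edge usable for clock recovery."""
    out = []
    for b in bits:
        out.extend((0, 1) if b else (1, 0))
    return out

def manchester_decode(symbols):
    """Recover the bit sequence from consecutive half-bit symbol pairs."""
    assert len(symbols) % 2 == 0
    return [1 if (a, b) == (0, 1) else 0
            for a, b in zip(symbols[::2], symbols[1::2])]
```

    The cost of this self-clocking property is that the line toggles at up to twice the bit rate, which is why the 500 Mbps figure quoted above implies up to 1 GHz of signaling activity on the wire.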

    On Spike-Timing-Dependent-Plasticity, Memristive Devices, and Building a Self-Learning Visual Cortex

    In this paper we present a very exciting overlap between emergent nanotechnology and neuroscience, which has been discovered by neuromorphic engineers. Specifically, we link one type of memristor nanotechnology device to the biological synaptic update rule known as spike-timing-dependent plasticity (STDP) found in real biological synapses. Understanding this link allows neuromorphic engineers to develop circuit architectures that use this type of memristor to artificially emulate parts of the visual cortex. We focus on the type of memristors referred to as voltage- or flux-driven memristors and center our discussion on a behavioral macro-model for such devices. The implementations result in fully asynchronous architectures with neurons sending their action potentials not only forward but also backward. One critical aspect is to use neurons that generate spikes of specific shapes. We will see how, by changing the shapes of the neuron action potential spikes, we can tune and manipulate the STDP learning rules for both excitatory and inhibitory synapses. We will see how neurons and memristors can be interconnected to achieve large-scale spiking learning systems that follow a type of multiplicative STDP learning rule. We will briefly extend the architectures to use three-terminal transistors with similar memristive behavior. We will illustrate how a V1 visual cortex layer can be assembled and how it is capable of learning to extract orientations from visual data coming from a real artificial CMOS spiking retina observing real-life scenes. Finally, we will discuss limitations of currently available memristors. The results presented are based on behavioral simulations and do not take into account non-idealities of devices and interconnects.
The aim of this paper is to present, in a tutorial manner, an initial framework for the possible development of fully asynchronous STDP learning neuromorphic architectures exploiting two- or three-terminal memristive devices. All files used for the simulations are made available through the journal web site.
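    The multiplicative STDP rule mentioned above can be sketched in its standard pairwise form: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise, with the update scaled multiplicatively by the weight's remaining headroom or current value. This is a generic textbook formulation, not the paper's memristor-derived rule, and the parameter values below are illustrative assumptions:

```python
import math

def stdp_update(w, dt, w_max=1.0, a_plus=0.05, a_minus=0.05, tau=20.0):
    """Multiplicative pairwise STDP sketch.

    dt = t_post - t_pre (in ms). Pre-before-post (dt > 0) potentiates,
    scaled by the headroom (w_max - w); post-before-pre (dt < 0)
    depresses, scaled by the current weight w. Both windows decay
    exponentially with time constant tau. Parameters are illustrative.
    """
    if dt > 0:
        w += a_plus * (w_max - w) * math.exp(-dt / tau)
    elif dt < 0:
        w -= a_minus * w * math.exp(dt / tau)
    return min(max(w, 0.0), w_max)
```

    In the memristor realization described in the paper, this curve is not programmed explicitly: it emerges from the voltage across the device when the pre- and postsynaptic spike waveforms overlap, which is why the spike shape controls the learning rule.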

    Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing

    Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations on efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in a different way. Frame-Based ConvNets process video information frame by frame in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out, so memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative able to perform convolution of a spike-based source of visual information with very low latency, which makes them ideal for very high-speed applications. However, their hardware resources need to be available all the time and cannot be time-multiplexed; thus, the hardware should be modular, reconfigurable, and expandable. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGAs have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions, with a brief description of both systems and a discussion of their differences, pros, and cons.

    Modular and scalable implementation of AER neuromorphic systems

    Get PDF
    Dissertation submitted by Carlos Zamarreño Ramos for the degree of Doctor at the Universidad de Sevilla, Departamento de Electrónica y Electromagnetismo. The author thanks the Ministerio de Educación y Ciencia for its financial support over the last four years through an FPU (Formación del Profesorado Universitario) grant, and the CSIC (Consejo Superior de Investigaciones Científicas) for its support through the research initiation grants held during the last two years of the Ingeniería de Telecomunicación degree. Peer Reviewed

    vmserdes

    Serial AER interface with switchable voltage-mode drivers capable of transmitting asynchronous AER data with high energy efficiency. Technology: AMS 0.35 µm, 4 metal layers.
    The VULCANO project aims to exploit the great potential of AER (Address Event Representation) technology for very high speed vision sensing and processing, as well as for mechanical actuator and motor control in neuro-robotics. In conventional vision, a video camera captures sequences of still frames or images, each of which has to be processed by sophisticated algorithms if automatic recognition tasks are desired (in automotive applications, robotics, etc.). AER is based on a different concept, which mimics the structure and information coding of the brain. In AER, each sensor pixel sends events when it detects a given level of some visual property (motion, contrast, luminance, ...). This way, the sensor output is a continuous flow of spatial and temporal information that is not restricted to discrete frames. This continuous flow of visual information is sent to a hierarchical structure which mimics the neocortex and extracts relevant information in a continuous and parallel manner, event after event, without waiting for frames. The AER philosophy allows scalable neurocortical systems to be assembled: for example, to augment the catalog of known objects one only needs to add more AER modules in parallel, which does not degrade speed (as happens in the brain).
    Proyecto VULCANO, Ref. TEC2009-10639-C04-01 (2010-2013), Ministerio de Ciencia e Innovación
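    The core of the AER idea is that an event is just an address word on a shared bus. A minimal sketch of how a pixel event might be packed into and unpacked from such a word (the field layout below, polarity in bit 0 and 8-bit x/y coordinates above it, is purely illustrative; real AER chips each define their own address format):

```python
def pack_event(x, y, polarity):
    """Pack a pixel event into one address word.
    Illustrative layout: bit 0 = polarity (ON/OFF event),
    bits 1-8 = y coordinate, bits 9-16 = x coordinate."""
    assert 0 <= x < 256 and 0 <= y < 256 and polarity in (0, 1)
    return (x << 9) | (y << 1) | polarity

def unpack_event(word):
    """Recover (x, y, polarity) from a packed address word."""
    return (word >> 9) & 0xFF, (word >> 1) & 0xFF, word & 1
```

    Because timing is implicit (an event's meaning includes the instant it appears on the bus), serializing these words, as the vmserdes and cmserdes interfaces do, must preserve event timing with low and predictable latency.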

    quad lvds 500M

    AER-LVDS quad at a 500 Mbps rate. The goal of the project is to build a robotic system that integrates sensing, processing, and motor actuation in the form of nerve impulses, as is done in biological systems. To build this system, bio-inspired retinas and processors developed at the IMSE will be integrated with motor actuation algorithms implemented on FPGAs at the Universidad de Sevilla, in an ambidextrous humanoid robot of the Universidad Politécnica de Cartagena.
    Proyecto SAMANTA II, Ref. TEC2006-11730-C03-01/MIC (2006-2010), Ministerio de Educación y Ciencia

    cmserdes

    Serial AER interface with switchable current-mode drivers capable of transmitting asynchronous AER data with high energy efficiency.
    The VULCANO project aims to exploit the great potential of AER (Address Event Representation) technology for very high speed vision sensing and processing, as well as for mechanical actuator and motor control in neuro-robotics. In conventional vision, a video camera captures sequences of still frames or images, each of which has to be processed by sophisticated algorithms if automatic recognition tasks are desired (in automotive applications, robotics, etc.). AER is based on a different concept, which mimics the structure and information coding of the brain. In AER, each sensor pixel sends events when it detects a given level of some visual property (motion, contrast, luminance, ...). This way, the sensor output is a continuous flow of spatial and temporal information that is not restricted to discrete frames. This continuous flow of visual information is sent to a hierarchical structure which mimics the neocortex and extracts relevant information in a continuous and parallel manner, event after event, without waiting for frames. The AER philosophy allows scalable neurocortical systems to be assembled: for example, to augment the catalog of known objects one only needs to add more AER modules in parallel, which does not degrade speed (as happens in the brain).
    Proyecto VULCANO, Ref. TEC2009-10639-C04-01 (2010-2013), Ministerio de Ciencia e Innovación